
    Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory-only (using echolocation), and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle whose position was varied across trials: at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals, with fewer collisions, lower movement times, fewer velocity corrections, and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that of the other groups when using audition, but comparable to that of the other groups when using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. This research was supported by the Vision and Eye Research Unit, Postgraduate Medical Institute at Anglia Ruskin University (awarded to SP), and the Medical Research Council (awarded to BCJM, Grant number G0701870).
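
    As an illustration of how such kinematic indices might be derived from motion-capture data, the sketch below (Python) computes movement time, mean speed, and a simple count of velocity corrections, defined here as a deceleration immediately followed by a re-acceleration in the speed profile. The function name, smoothing window, and correction criterion are assumptions for illustration, not the analysis pipeline used in the study.

        import numpy as np

        def movement_metrics(positions, fps=100.0, window=11):
            """Simple kinematic indices from a marker trajectory.

            positions : (n_frames, n_dims) array of marker coordinates in metres.
            fps       : capture rate of the motion-capture system, in frames per second.
            Returns movement time (s), mean speed (m/s), and the number of
            velocity corrections (local dips in the speed profile).
            """
            positions = np.asarray(positions, dtype=float)
            # Frame-to-frame speed in m/s.
            speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
            # Moving-average smoothing to suppress marker jitter before counting dips.
            speed = np.convolve(speed, np.ones(window) / window, mode="same")
            movement_time = len(positions) / fps
            dv = np.diff(speed)
            # A deceleration immediately followed by a re-acceleration counts as one correction.
            corrections = int(np.sum((dv[:-1] < 0) & (dv[1:] > 0)))
            return movement_time, float(speed.mean()), corrections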

    Auditory spatial representations of the world are compressed in blind humans

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities, such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those of normally sighted controls. Blind participants overestimated the distance to nearby sound sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
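
    The compressive power functions mentioned above can be made concrete with a small fit: judged distance modelled as k * d^a, where an exponent a below 1 corresponds to compression, with near sources overestimated and far sources underestimated. The sketch below is a minimal illustration using SciPy's curve_fit on made-up values; the data and parameter names are hypothetical, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def power_function(d, k, a):
            """Compressive power function: judged distance = k * d**a (a < 1 implies compression)."""
            return k * np.power(d, a)

        # Hypothetical judged-vs-actual virtual distances in metres, for illustration only.
        actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
        judged = np.array([1.4, 2.3, 3.8, 5.9, 8.7])

        (k, a), _ = curve_fit(power_function, actual, judged, p0=(1.0, 1.0))
        # a < 1: nearby sources are overestimated and remote sources underestimated.
        print(f"k = {k:.2f}, a = {a:.2f}")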

    A low-cost 2-D video system can accurately and reliably assess adaptive gait kinematics in healthy and low vision subjects

    3-D gait analysis is the gold standard, but many healthcare clinics and research institutes would benefit from a system that is inexpensive and simple yet comparably accurate. The present study examines whether a low-cost 2-D motion capture system can accurately and reliably assess adaptive gait kinematics in subjects with central vision loss, older controls, and younger controls. Subjects were asked to walk up to and step over a 10 cm high obstacle positioned in the middle of a 4.5 m walkway. Four trials were recorded simultaneously with the Vicon motion capture system (3-D system) and a video camera positioned perpendicular to the obstacle (2-D system). The kinematic parameters (crossing height, crossing velocity, foot placement, single support time) were calculated offline. Strong Pearson's correlations were found between the two systems for all parameters (average r = 0.944, all p < 0.001). Bland-Altman analysis showed that the agreement between the two systems was good in all three groups after correcting for systematic biases related to the 2-D marker positions. The test-retest reliability of both systems was high (average ICC = 0.959). These results show that a low-cost 2-D video system can reliably and accurately assess adaptive gait kinematics in healthy and low vision subjects.
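
    A minimal sketch of how the agreement statistics named above could be computed is given below: a Pearson correlation plus a Bland-Altman bias and 95% limits of agreement on paired measurements from the two systems. The example values and variable names are hypothetical, not the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between paired measurements a and b."""
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            diff = a - b
            bias = diff.mean()
            half_range = 1.96 * diff.std(ddof=1)
            return bias, bias - half_range, bias + half_range

        # Hypothetical crossing heights (cm) from the video (2-D) and Vicon (3-D) systems.
        video = [18.2, 21.5, 19.9, 23.1, 20.4, 22.0]
        vicon = [18.0, 21.1, 20.2, 22.8, 20.0, 21.7]

        r, p = pearsonr(video, vicon)
        bias, lower, upper = bland_altman(video, vicon)
        print(f"r = {r:.3f} (p = {p:.3g}), bias = {bias:.2f} cm, LoA = [{lower:.2f}, {upper:.2f}] cm")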

    Sensory substitution information informs locomotor adjustments when walking through apertures

    The study assessed the ability of the central nervous system (CNS) to use echoic information from sensory substitution devices (SSDs) to rotate the shoulders and safely pass through apertures of different widths. Ten visually normal participants performed this task with full vision, or blindfolded using an SSD to obtain information regarding the width of an aperture created by two parallel panels. Two SSDs were tested. Participants passed through apertures of +0%, +18%, +35%, and +70% of measured body width. Kinematic indices recorded were movement time, shoulder rotation, average walking velocity across the trial, and peak walking velocities before crossing, after crossing, and throughout the whole trial. Analyses showed that participants used SSD information to regulate shoulder rotation, with greater rotation associated with narrower apertures. Compared to vision, rotations made using an SSD were greater, movement times were longer, average walking velocity was lower, and peak velocities before crossing, after crossing, and throughout the whole trial were smaller, suggesting greater caution. Collisions sometimes occurred using an SSD but not using vision, indicating that substituted information did not always result in accurate shoulder rotation judgements. No differences were found between the two SSDs. The data suggest that spatial information provided by sensory substitution allows the relative positions of the aperture panels to be internally represented, enabling the CNS to modify shoulder rotation according to aperture width. The increased buffer space indicated by greater rotations (up to approximately 35% for apertures of +18% of body width) suggests that these spatial representations are not as accurate as those offered by full vision.
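
    To make the geometry behind the shoulder-rotation findings concrete, the sketch below estimates the yaw of the shoulder line from two shoulder marker positions and the effective frontal width presented to an aperture after rotation. The marker layout, axis convention, and function names are illustrative assumptions, not the study's processing.

        import numpy as np

        def shoulder_rotation_deg(left_shoulder, right_shoulder, walking_dir=(0.0, 1.0)):
            """Yaw of the shoulder line relative to the frontal plane, in degrees.

            left_shoulder, right_shoulder : (x, y) marker positions in the walking plane.
            walking_dir                   : direction of travel.
            0 deg means shoulders square to the aperture; 90 deg means fully bladed.
            """
            shoulder_vec = np.asarray(right_shoulder, dtype=float) - np.asarray(left_shoulder, dtype=float)
            d = np.asarray(walking_dir, dtype=float)
            perp = np.array([-d[1], d[0]])  # axis perpendicular to the direction of travel
            cos_theta = abs(np.dot(shoulder_vec, perp)) / (np.linalg.norm(shoulder_vec) * np.linalg.norm(perp))
            return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

        def effective_width(shoulder_width, rotation_deg):
            """Frontal-plane width presented to the aperture after rotating the shoulders."""
            return shoulder_width * np.cos(np.radians(rotation_deg))

        # Example: a 0.45 m shoulder width rotated by 30 deg presents roughly 0.39 m to the aperture.
        print(f"{effective_width(0.45, 30):.3f} m")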

    Partial Visual Loss Affects Self-reports of Hearing Abilities Measured Using a Modified Version of the Speech, Spatial, and Qualities of Hearing Questionnaire

    We assessed how visually impaired (VI) people perceived their own auditory abilities using an established hearing questionnaire, the Speech, Spatial, and Qualities of Hearing Scale (SSQ), which was adapted to make it relevant and applicable to VI individuals by removing references to visual aspects while retaining the meaning of the original questions. The resulting questionnaire, the SSQvi, assessed perceived hearing ability in diverse situations, including the ability to follow conversations with multiple speakers, to judge how far away a vehicle is, and to perceptually segregate simultaneous sounds. The SSQvi was administered to 33 VI and 33 normally sighted participants. All participants had normal hearing or mild hearing loss, and all VI participants had some residual visual ability. VI participants gave significantly higher (better) scores than sighted participants for: (i) one speech question, indicating less difficulty in following a conversation that switches from one person to another; (ii) one spatial question, indicating less difficulty in localizing several talkers; and (iii) three qualities questions, indicating less difficulty in segregating speech from music, hearing music more clearly, and better speech intelligibility in a car. These findings are consistent with the perceptual enhancement hypothesis that certain auditory abilities are improved to help compensate for loss of vision, and show that full visual loss is not necessary for perceived changes in auditory ability to occur across a range of auditory situations. For all other questions, scores did not differ significantly between the two groups. Questions related to effort, concentration, and ignoring distracting sounds were rated as most difficult by VI participants, as were situations involving divided attention with multiple streams of speech, following conversations in noise and in echoic environments, judging elevation or distance, and externalizing sounds. The questionnaire has potential clinical applications in assessing the success of clinical interventions and in setting more realistic intervention goals for those with auditory and/or visual losses. The results contribute toward providing benchmark scores for VI individuals. The research was supported by the Vision and Eye Research Unit (VERU), Postgraduate Medical Institute at Anglia Ruskin University, and MRC grant G0701870.

    Egocentric and allocentric representations in auditory cortex

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. The modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves, but that a minority of cells can represent sound location in the world independently of our own position.
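
    The egocentric/allocentric distinction can be illustrated with a small coordinate transform: the sketch below converts a world-fixed (allocentric) source position into a head-centred (egocentric) azimuth given the head's position and heading. An egocentric unit's tuning would follow this relative azimuth as the head moves, whereas an allocentric unit's tuning would follow the fixed world position. The coordinate conventions and function name are illustrative assumptions, not part of the recording or analysis pipeline.

        import numpy as np

        def egocentric_azimuth(source_xy, head_xy, head_heading_deg):
            """Azimuth of a sound source relative to the head (egocentric frame).

            source_xy        : world (allocentric) position of the source, in metres.
            head_xy          : world position of the head, in metres.
            head_heading_deg : direction the head is facing, in degrees (0 = +x axis).
            Returns the azimuth in degrees, positive counter-clockwise (to the left).
            """
            dx, dy = np.asarray(source_xy, dtype=float) - np.asarray(head_xy, dtype=float)
            world_azimuth = np.degrees(np.arctan2(dy, dx))
            # Wrap the head-relative angle to [-180, 180).
            return (world_azimuth - head_heading_deg + 180.0) % 360.0 - 180.0

        # The same world-fixed source seen from two head poses: the egocentric azimuth
        # changes with heading, while the allocentric (world) position does not.
        print(egocentric_azimuth((1.0, 1.0), (0.0, 0.0), 0.0))   # ~45 deg
        print(egocentric_azimuth((1.0, 1.0), (0.0, 0.0), 90.0))  # ~-45 deg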